
Op-Ed: How AI’s growing influence can make humans less moral

AI security cameras using facial recognition technology are displayed at the China International Exhibition on Public Safety and Security in Beijing in October 2018.
(Nicolas Asfouri / AFP/Getty Images)

It started out as a social experiment, but it quickly came to a bitter end. Microsoft’s chatbot Tay had been trained to have “casual and playful conversations” on Twitter, but once it was deployed, it took only 16 hours before Tay launched into tirades that included racist and misogynistic tweets.

As it turned out, Tay was mostly repeating the verbal abuse that humans were spouting at it — but the outrage that followed centered on the bad influence that Tay had on people who could see its hateful tweets, rather than on the people whose hateful tweets were a bad influence on Tay.

As children, we are all taught to be good people. Perhaps even more important, we are taught that bad company can corrupt good character — and one bad apple can spoil the bunch.


Today, we increasingly interact with machines powered by artificial intelligence — AI-powered smart toys as well as AI-driven social media platforms that affect our preferences. Could machines be bad apples? Should we avoid the company of bad machines, lest they corrupt us?

The question of how to make AI ethical is front and center in the public debate. For starters, the machine itself must not make unethical decisions: ones that reinforce existing racial and gender biases in hiring, lending, judicial sentencing and in facial detection software deployed by police and other public agencies.

What is less discussed, however, are the ways in which machines might make humans themselves less ethical.


People behave unethically when they can justify it to others, when they observe or believe that others cut ethical corners too, and when they can do so jointly with others (versus alone). In short, the magnetic field of social influence strongly sways people’s moral compass.

AI can also influence people as an advisor that recommends unethical action. Research shows that people will follow dishonesty-promoting advice provided by AI systems as much as they follow similar advice from humans.

Psychologically, an AI advisor can provide a justification to break ethical rules. For example, AI systems already analyze sales calls to boost sales performance. What if such an AI advisor suggests that deceiving customers is the surest way to maximize profits? As machines become more sophisticated and their advice more knowledgeable and personalized, people are more likely to be persuaded to follow their advice, even when it runs counter to their own intuition and knowledge.


Another way AI can influence us is as a role model. If you observe people on social media bullying others and expressing moral outrage, you may be more emboldened to do the same. When AI bots like the chatbot Tay act similarly on social platforms, people can also imitate their behavior.

More troubling is when AI turns into an enabler. People can partner with AI systems to cause harm to others. AI-generated synthetic media facilitate new forms of deception. Generating “deepfakes” — hyper-realistic imitations of audio-visual content — has become increasingly easy. Consequently, from 2019 to 2020, the number of deepfake videos grew from 14,678 to 100 million, a 6,820-fold increase. Using deepfakes, scammers have made phishing calls to employees of companies, imitating the voice of the chief executive. In one case, the damage amounted to over $240,000.

For would-be bad actors, using AI for deception is attractive. Often it is hard to identify the maker or disseminator of the deepfake, and the victim remains psychologically distant. Moreover, recent research reveals that people are overconfident in their ability to detect deepfakes, which makes them particularly susceptible to such attacks. This way, AI systems can turn into compliant “partners in crime” for all those with deceptive purposes — expert scammers as well as ordinary citizens.

Finally, and possibly most concerning, is the harm caused when decisions and actions are outsourced to AI. People can let algorithms act on their behalf, creating new ethical risks. This can occur with tasks as diverse as setting prices in online markets such as eBay or Airbnb, questioning criminal suspects or devising a company's sales strategy. Research reveals that letting algorithms set prices can lead to algorithmic collusion. Those employing AI systems for interrogation may not realize that an autonomous interrogation system might threaten torture to extract a confession. Those using AI-powered sales strategies may not be aware that deceptive tactics are part of the marketing approaches the AI system promotes.

Making use of AI in these cases, of course, differs markedly from outsourcing tasks to fellow humans. For one, the exact workings of an AI system’s decisions are often invisible and incomprehensible. Letting such “black box” algorithms perform tasks on one’s behalf increases ambiguity and plausible deniability, thus blurring the responsibility for any harm caused. And entrusting machines to execute tasks that can hurt people can also make the potential victims seem psychologically distant and abstract.

The dangerous trifecta of opacity, anonymity and distance makes it easier for people to turn a blind eye to what AI is doing, as long as AI provides them with benefits. As a result, whenever AI systems take over a new social role, new risks for corrupting human behavior will emerge. Interacting with and through intelligent machines might exert a pull on people's moral compass as strong as, or even stronger than, the pull exerted by interacting with other humans.


Instead of rushing to create new AI tools, we need to better understand these risks, and to promote the norms and the laws that will mitigate them. And we cannot simply rely on experience.

Humans have been dealing with bad apples — and bad moral influences — for millennia. But the lessons we learned and the social rules we devised may not apply when the bad apples turn out to be machines. That’s a central problem with AI that we have not begun to solve.

Nils Köbis is a postdoctoral fellow at the Max Planck Institute for Human Development. Iyad Rahwan is managing director of the institute. Jean-François Bonnefon is a research director at the Toulouse School of Economics.
